    Geoscience of Climate and Energy 8. Climate Models: Are They Compatible with Geological Constraints on Earth System Processes?

    Models of the global warming process are obliged to include many important small-scale processes that cannot be explicitly represented, given the low spatial and temporal resolution at which the models can be integrated. Because the parameterizations of these processes, in terms of the resolved-scale fields, must be tuned to enable the models to fit climate observations from the instrumental era, it is an open question whether such models remain robust when they are applied to the prediction of future warming trends. Geological inferences of past climate conditions provide a means by which the robustness of the models may be assessed. In this paper, two examples of such tests are described, both of which demonstrate that a state-of-the-art model is able to accurately simulate past conditions that differ radically from those of the modern era.

    Low-cost sensors for the measurement of atmospheric composition: overview of topic and future applications

    Measurements of reactive air pollutants and greenhouse gases underpin a huge variety of applications that span from academic research through to regulatory functions and services for individuals, governments, and businesses. Whilst the vast majority of these observations continue to use established analytical reference methods, miniaturization has led to a growth in the prominence of a generation of devices that are often described generically as "low-cost sensors" (LCSs). In practice, LCSs can have valuable features other than cost that differentiate them from previous technologies, including smaller size, lower weight, and reduced power consumption. The technologies falling within this class range from passive electrochemical and metal oxide sensors that may cost only a few dollars each, through to more complex microelectromechanical devices that use the same analytical principles as reference instruments, but in smaller and lower-power packages. As a class of device, low-cost sensors encompass a very wide range of technologies and, as a consequence, they produce a wide range of measurement quality. When selecting an LCS approach for a particular task, users need to ensure that the specific sensor to be used will meet the application's data quality requirements. This report considers sensors that are designed for the measurement of atmospheric composition at ambient concentrations, focusing on reactive gaseous air pollutants (CO, NOx, O3, SO2), particulate matter (PM), and the greenhouse gases CO2 and CH4. It examines example applications where new scientific and technical insight may potentially be gained from using a network of sensors when compared to more sparsely located observations. Access to low-cost sensors appears to offer exciting new atmospheric applications, to support new services, and potentially to facilitate the inclusion of a new cohort of users. Based on the scientific literature available up to the end of 2017, however, it is clear that some trade-offs arise when LCSs are used in place of existing reference methods. Smaller and/or lower-cost devices tend to be less sensitive, less precise, and less chemically specific to the compound or variable of interest. This is balanced by a potential increase in the spatial density of measurements that can be achieved by a network of sensors. The current state of the art in terms of accuracy, reliability, and reproducibility of a range of different sensors is described, along with the key analytical principles and what has been learned so far about low-cost sensors from both laboratory studies and real-world tests. A summary of concepts is included on how sensors and reference instruments may be used together, as well as with modelling, in a complementary way to improve data quality and generate additional insight into pollution behaviour. The report provides advice on key considerations when matching a project, study, or application with an appropriate sensor monitoring strategy, and on the wider application-specific requirements for calibration and data quality. The report contains a number of suggestions on future requirements for low-cost sensors aimed at manufacturers, users, and the broader atmospheric community. It highlights that low-cost sensors are not currently a direct substitute for reference instruments, especially for mandatory purposes; they are, however, a complementary source of information on air quality, provided an appropriate sensor is used.
It is important for prospective users to identify their specific application needs first, examine examples of studies or deployments that share similar characteristics, identify the likely limitations associated with using LCSs, and then evaluate whether their selected LCS approach or technology would sufficiently meet the needs of the measurement objective. Previous studies in both the laboratory and the field have shown that data quality from LCSs is highly variable, and there is no simple answer to basic questions like "are low-cost sensors reliable?". Even when the same basic sensor components are used, real-world performance can vary because of different data correction and calibration approaches. This can make the task of understanding data quality very challenging for users, since good or bad performance demonstrated by one device or commercial supplier does not mean that similar devices from others will work the same way. Manufacturers should provide information on their characterizations of sensor and sensor system performance in a manner that is as comprehensive as possible, including results from in-field testing. Reporting of those data should, where possible, parallel the metrics used for reference instrument specifications, including information on the calibration conditions. Whilst not all users will actively use this information, it will support the general development framework for LCS use. Openness in the assessment of sensor performance across varying environmental conditions would be very valuable in guiding new user applications and would help the field develop more rapidly. Users and operators of low-cost sensors should have a clearly defined application scope and a set of questions they wish to address prior to selecting a sensor approach; this will guide the selection of the most appropriate technology to support a project. Renewed efforts are needed to enhance engagement and the sharing of knowledge and skills between the data science community, the atmospheric science community, and others to improve LCS data processing and analysis methods. Improved information sharing between manufacturers and user communities should be supported through regular dialogue on emerging issues related to sensor performance, best practice, and applications. Adoption of open-access and open-data policies to further facilitate the development, application, and use of LCS data is essential. Such practices would facilitate the exchange of information among the wide range of interested communities, including national and local government, research, policy, industry, and the public, and would encourage accountability for data quality and for any advice derived from LCS data. This assessment was initiated at the request of the WMO Commission for Atmospheric Sciences (CAS) and supported by the broader stakeholder atmospheric community, including the International Global Atmospheric Chemistry (IGAC) project, the Task Force on Measurement and Modelling of the European Monitoring and Evaluation Programme of the LRTAP Convention, UN Environment, the World Health Organization, and the Network of Air Quality Reference Laboratories of the European Commission (AQUILA).
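As an illustration of the calibration step discussed in this report summary, the sketch below fits a simple linear correction for a low-cost sensor against a co-located reference instrument. It is a minimal example with synthetic data, not a method taken from the report itself; real deployments usually also need corrections for temperature, humidity, and sensor drift.

```python
import numpy as np

# Synthetic example: co-located hourly NO2 readings (ppb).
# In a real deployment these arrays would come from the LCS and the reference analyser.
rng = np.random.default_rng(0)
reference = rng.uniform(5, 60, size=500)                      # reference instrument
sensor_raw = 0.7 * reference + 4.0 + rng.normal(0, 3, 500)    # biased, noisy LCS output

# Fit a linear calibration sensor_raw -> reference by ordinary least squares.
A = np.column_stack([sensor_raw, np.ones_like(sensor_raw)])
(gain, offset), *_ = np.linalg.lstsq(A, reference, rcond=None)

sensor_calibrated = gain * sensor_raw + offset

rmse_before = np.sqrt(np.mean((sensor_raw - reference) ** 2))
rmse_after = np.sqrt(np.mean((sensor_calibrated - reference) ** 2))
print(f"gain={gain:.2f}, offset={offset:.2f} ppb")
print(f"RMSE before: {rmse_before:.1f} ppb, after: {rmse_after:.1f} ppb")
```

Whether a single linear correction like this is adequate is itself application-dependent, which is exactly the point the report makes about matching the sensor strategy to the data quality requirements of the task.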

    Late Holocene sea- and land-level change on the U.S. southeastern Atlantic coast

    Late Holocene relative sea-level (RSL) reconstructions can be used to estimate rates of land-level (subsidence or uplift) change and therefore to modify global sea-level projections for regional conditions. These reconstructions also provide the long-term benchmark against which modern trends are compared and an opportunity to understand the response of sea level to past climate variability. To address a spatial gap in late Holocene data from Florida and Georgia, we reconstructed ~1.3 m of RSL rise in northeastern Florida (USA) during the past ~2600 years using plant remains and foraminifera in a dated core of high salt-marsh sediment. The reconstruction was combined with tide-gauge data from nearby Fernandina Beach, which measured 1.91 ± 0.26 mm/year of RSL rise since 1900 CE. The average rate of RSL rise prior to 1800 CE was 0.41 ± 0.08 mm/year. Assuming negligible change in global mean sea level from meltwater input/removal and thermal expansion/contraction, this sea-level history approximates net land-level (subsidence and geoid) change, principally from glacio-isostatic adjustment. Historic rates of rise commenced between 1850 and 1890 CE, and it is virtually certain (P = 0.99) that the average rate of 20th century RSL rise in northeastern Florida was faster than during any of the preceding 26 centuries. The linearity of RSL rise in Florida contrasts with the variability reconstructed at sites further north on the U.S. Atlantic coast and may suggest a role for ocean dynamic effects in explaining these more variable RSL reconstructions. Comparison of reconstructed rates of late Holocene RSL rise with historic trends measured by tide gauges indicates that 20th century sea-level trends along the U.S. Atlantic coast were not dominated by the characteristic spatial fingerprint of melting of the Greenland Ice Sheet.
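To make the rate estimates above concrete, the sketch below fits a linear relative sea-level trend (in mm/yr) to an annual tide-gauge series by ordinary least squares. The data are synthetic, generated around a trend close to the ~1.9 mm/yr quoted for Fernandina Beach; the study's actual reconstruction combines proxy and tide-gauge records in a statistical framework that also propagates age and elevation uncertainties.

```python
import numpy as np

# Synthetic annual-mean sea level (mm) for a hypothetical gauge, 1900-2000.
rng = np.random.default_rng(1)
years = np.arange(1900, 2001)
sea_level = 1.9 * (years - 1900) + rng.normal(0, 15, years.size)

# Ordinary least-squares fit: sea_level = rate * year + intercept
A = np.column_stack([years, np.ones_like(years)])
coeffs, residuals, *_ = np.linalg.lstsq(A, sea_level, rcond=None)
rate, intercept = coeffs

# 1-sigma uncertainty on the rate from the residual variance
dof = years.size - 2
sigma2 = residuals[0] / dof
cov = sigma2 * np.linalg.inv(A.T @ A)
rate_err = np.sqrt(cov[0, 0])

print(f"RSL trend: {rate:.2f} ± {rate_err:.2f} mm/yr")
```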

    Towards an automated analysis of bacterial peptidoglycan structure.

    Peptidoglycan (PG) is an essential component of the bacterial cell envelope. This macromolecule consists of glycan chains of alternating N-acetylglucosamine and N-acetylmuramic acid residues, cross-linked by short peptides containing nonstandard amino acids. Structural analysis of PG usually involves enzymatic digestion of glycan strands and separation of disaccharide peptides by reversed-phase HPLC, followed by collection of individual peaks for MALDI-TOF and/or tandem mass spectrometry. Here, we report a novel strategy using shotgun proteomics techniques for a systematic and unbiased structural analysis of PG, using high-resolution mass spectrometry and automated analysis of HCD and ETD fragmentation spectra with the Byonic software. Using the PG of the nosocomial pathogen Clostridium difficile as a proof of concept, we show that this high-throughput approach allows the identification of all PG monomers and dimers previously described, leaving only the disambiguation of 3-3 and 4-3 cross-linking as a manual step. Our analysis confirms previous findings that C. difficile peptidoglycan includes mainly deacetylated N-acetylglucosamine residues and 3-3 cross-links. The analysis also revealed a number of low-abundance muropeptides with peptide sequences not previously reported. Graphical Abstract: The bacterial cell envelope includes the plasma membrane, peptidoglycan, and a surface layer. Peptidoglycan is unique to bacteria and is the target of the most important antibiotics; here it is analyzed by mass spectrometry.

    A census of atmospheric variability from seconds to decades

    This paper synthesizes and summarizes atmospheric variability on time scales from seconds to decades through a phenomenological census. We focus mainly on unforced variability in the troposphere, stratosphere, and mesosphere. In addition to atmosphere-only modes, our scope also includes coupled modes, in which the atmosphere interacts with the other components of the Earth system, such as the ocean, hydrosphere, and cryosphere. The topics covered include turbulence on time scales of seconds and minutes, gravity waves on time scales of hours, weather systems on time scales of days, atmospheric blocking on time scales of weeks, the Madden–Julian Oscillation on time scales of months, the Quasi-Biennial Oscillation and El Niño–Southern Oscillation on time scales of years, and the North Atlantic, Arctic, Antarctic, Pacific Decadal, and Atlantic Multidecadal Oscillations on time scales of decades. The paper serves as an introduction to a special collection of Geophysical Research Letters on atmospheric variability. We hope that both this paper and the collection will serve as a useful resource for the atmospheric science community and will act as inspiration for setting future research directions

    Deploying a Top-100 Supercomputer for Large Parallel Workloads: the Niagara Supercomputer

    Niagara is currently the fastest supercomputer accessible to academics in Canada. It was deployed at the beginning of 2018 and has been serving the research community ever since. This homogeneous 60,000-core cluster, owned by the University of Toronto and operated by SciNet, was intended to enable large parallel jobs and has a measured performance of 3.02 petaflops, debuting at #53 in the June 2018 TOP500 list. It was designed to optimize the throughput of a range of scientific codes running at scale, as well as energy efficiency and network and storage performance and capacity. It replaced two systems that SciNet operated for over 8 years, the Tightly Coupled System (TCS) and the General Purpose Cluster (GPC). In this paper we describe the transition from these two systems, the procurement and deployment processes, and the unique features that make Niagara a one-of-a-kind machine in Canada.
    Comment: PEARC'19, "Practice and Experience in Advanced Research Computing", July 28–August 1, 2019, Chicago, IL, US
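For a sense of scale, the snippet below converts the aggregate figures quoted in this abstract into a per-core number. It uses only the values stated above (60,000 cores, 3.02 petaflops measured), not the more detailed benchmark data reported in the paper.

```python
# Values taken from the abstract; everything else is simple arithmetic.
total_cores = 60_000
measured_pflops = 3.02          # measured performance in petaflops (the TOP500 debut suggests an HPL figure)

# 1 petaflop/s = 1e6 gigaflop/s
sustained_gflops_per_core = measured_pflops * 1e6 / total_cores
print(f"Sustained performance per core: {sustained_gflops_per_core:.1f} GFLOP/s")
```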

    Klima. 30 pitanja za razumijevanje Konferencije u Parizu [Climate: 30 Questions for Understanding the Paris Conference] (Pascal Canfin and Peter Staime)

    The last deglaciation, which marked the transition between the last glacial and present interglacial periods, was punctuated by a series of rapid (centennial and decadal) climate changes. Numerical climate models are useful for investigating mechanisms that underpin the climate change events, especially now that some of the complex models can be run for multiple millennia. We have set up a Paleoclimate Modelling Intercomparison Project (PMIP) working group to coordinate efforts to run transient simulations of the last deglaciation, and to facilitate the dissemination of expertise between modellers and those engaged with reconstructing the climate of the last 21 000 years. Here, we present the design of a coordinated Core experiment over the period 21–9 thousand years before present (ka) with time-varying orbital forcing, greenhouse gases, ice sheets and other geographical changes. A choice of two ice sheet reconstructions is given, and we make recommendations for prescribing ice meltwater (or not) in the Core experiment. Additional focussed simulations will also be coordinated on an ad hoc basis by the working group, for example to investigate more thoroughly the effect of ice meltwater on climate system evolution, and to examine the uncertainty in other forcings. Some of these focussed simulations will target shorter durations around specific events in order to understand them in more detail and allow for the more computationally expensive models to take part
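As a schematic of what "time-varying forcing" means in practice for such a transient run, the sketch below interpolates a coarsely resolved CO2 record onto an annual model timeline. The tie points here are illustrative placeholders only; the actual Core experiment prescribes specific published forcing reconstructions for greenhouse gases, orbital parameters, and ice sheets.

```python
import numpy as np

# Illustrative CO2 tie points (age in years before present, concentration in ppm).
# Placeholder values, not the forcing dataset prescribed by the experiment protocol.
tie_age_bp = np.array([9_000, 12_000, 15_000, 18_000, 21_000])   # must be increasing for np.interp
tie_co2_ppm = np.array([265.0, 240.0, 225.0, 200.0, 190.0])

# Annual model timeline running forward in time from 21 ka to 9 ka.
model_age_bp = np.arange(21_000, 8_999, -1)

# Linear interpolation of the sparse record onto every model year.
co2_series_ppm = np.interp(model_age_bp, tie_age_bp, tie_co2_ppm)

print(f"CO2 at 21 ka: {co2_series_ppm[0]:.0f} ppm, at 9 ka: {co2_series_ppm[-1]:.0f} ppm")
```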

    Persistent elastic behavior above a megathrust rupture patch: Nias island, West Sumatra

    We quantify fore-arc deformation using fossil reefs to test the assumption commonly made in seismic cycle models that anelastic deformation of the fore arc is negligible. Elevated coral microatolls, paleoreef flats, and chenier plains show that the Sumatran outer arc island of Nias has experienced a complex pattern of relatively slow long-term uplift and subsidence during the Holocene epoch. This same island rose up to 2.9 m during the Mw 8.7 Sunda megathrust rupture in 2005. The mismatch between the 2005 and Holocene uplift patterns, along with the overall low rates of Holocene deformation, reflects the dominance of elastic strain accumulation and release along this section of the Sunda outer arc high and the relatively subordinate role of upper plate deformation in accommodating long-term plate convergence. The fraction of 2005 uplift that will be retained permanently is generally <4% for sites that experienced more than 0.25 m of coseismic uplift. Average uplift rates since the mid-Holocene range from 1.5 to −0.2 mm/a and are highest on the eastern coast of Nias, where coseismic uplift was nearly zero in 2005. The pattern of long-term uplift and subsidence is consistent with slow deformation of Nias along closely spaced folds in the north and trenchward dipping back thrusts in the southeast. Low Holocene tectonic uplift rates provide for excellent geomorphic and stratigraphic preservation of the mid-Holocene relative sea level high, which was under way by ∼7.3 ka and persisted until ∼2 ka
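One way to read the "<4% retained" statement above is as simple bookkeeping: over one earthquake cycle, the permanently stored uplift is roughly the long-term uplift rate times the recurrence interval, and the retained fraction is that amount divided by the coseismic uplift. The sketch below uses hypothetical inputs chosen only for illustration, not values or a calculation taken from the paper.

```python
# Hypothetical inputs for illustration only (not values reported in the paper).
long_term_uplift_rate_mm_per_yr = 0.3   # assumed net Holocene uplift rate at a site
recurrence_interval_yr = 300            # assumed megathrust recurrence interval
coseismic_uplift_m = 2.5                # assumed uplift at the same site in a 2005-style event

permanent_uplift_per_cycle_m = long_term_uplift_rate_mm_per_yr * recurrence_interval_yr / 1000.0
retained_fraction = permanent_uplift_per_cycle_m / coseismic_uplift_m

print(f"Permanent uplift per cycle: {permanent_uplift_per_cycle_m:.2f} m")
print(f"Fraction of coseismic uplift retained: {retained_fraction:.1%}")
```

With these assumed numbers the retained fraction comes out near 3.6%, which is consistent in spirit with the dominance of elastic strain accumulation and release described in the abstract.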